\input memo.tex[let,jmc]
\title{Review of three books by Daniel Dennett}
%dennet[f88,jmc]	Review of The Intentional Stance and Elbow Room

	AI has in common with philosophy topics in metaphysics,
ontology, epistemology and philosophy of mind.  Philosophers have
put more than 2,000 years of work into these matters, so it
behooves AI researchers to consider what they have discovered
that we might be able to use.  If we can't use the results of
philosophers' investigations, we need to know the reason why.  We
can also consider what new light AI has to offer philosophy.

	What AI has in common with philosophy is a lot more than
what physics has in common with what the philosophers call
``philosophy of physics'' or what mathematics has in common with
the philosophy of mathematics.  A general AI system, e.g. a
mobile robot, needs a general view of the world into which
particular information is fitted and actions are planned.  This
overlaps metaphysics.  It must have views about what it and other
entities know and can know.  Designing this involves
epistemology.  The variables in its programs and internal
databases must range over certain classes of entities, and this
is ontology.  It must make computations about which future states
are achievable by itself and other ``persons'' and which are not,
and this is related to the problem of free will.

	Compared to this, haggling over whether the intuitive concept
of intelligence necessarily requires a human-type body is dull stuff.
Unfortunately, this is what constitutes much of the attention
philosophers have paid to AI.

	In his earlier work, Dennett (1971), reprinted in his (1977),
introduces three ``stances'' from which one can regard a physical
system---the physical stance, the design stance and the intentional
stance.  Here are the distinctions, presented with the aid of some
examples.

	The physical stance looks at the object, say an alarm clock,
as a physical object made of parts interacting physically in some
manner.  For example, a mechanical alarm clock has gears and either
a wound up spring or an electric motor, and an electronic alarm
clock has a quartz crystal and a count-down circuit, etc.

	The design stance focusses attention on what the clock does,
how one sets the time and the alarm, how one reads it and the
fact that it keeps time.  Whether it is mechanical or electronic
is irrelevant to the design stance.

	Dennett's main point about the design stance is that it
is not a mere abbreviation of the physical stance.  One can know
all one needs to know about a normally functioning alarm clock
from the design stance, and the physical implementation of some
system can be changed without the user of the system being affected
or even interested.  Typically, the user of a hotel alarm clock
doesn't even think about whether the clock is electronic or mechanical.

	From the AI point of view, we shall often want our programs
to take the design stance towards systems with which they deal.
For many purposes this suffices.
However, they sometimes need to relate the design stance to a
physical stance.  This is needed when one is designing a physical
implementation of a system that has been specified functionally.
It is also necessary when a system breaks down.

	{\it Elbow Room} is concerned with the problem of what there is
in the way of free will.  As a philosopher, Dennett is primarily
concerned with human free will, although he mentions machines.
He also asks what kind of free will people would want to have.

	It seems that philosophers divide themselves into
``compatibilists'' and ``incompatibilists'' when they consider free
will.  The compatibilists hold that free will and determinism are
compatible and incompatibilists hold them to be incompatible and sometimes
look to quantum mechanics as providing a way out.  We roboticists had
better build our robots to be compatibilists---at least with regard
to their concrete actions.  For example, suppose we want a robot to
paint a step ladder and the ceiling (McDermott 1982).  Suppose it
said,

{\narrow ``Well, I could paint the ceiling first or I could paint
the step ladder first.  Whoops!  Wait a minute!  I'm a robot and
a deterministic device, at least when I function correctly.
Therefore, it's meaningless to say that I can do either.  I will
do whichever of the two I am fated to do.''}

	Instead, we want the robot to use the word ``can'' (or rather
a suitably corresponding term in its internal language) in such a way
that it will infer that it can paint the ceiling and the step ladder
in either order and then decide that painting the ceiling first
gives a better result, since it avoids getting paint on the
robot's feet.

	(McCarthy and Hayes 1969) discusses {\it can} concretely in
terms of systems of finite automata.  We define sentences like ``In a
certain initial configuration, automaton 1 can put automaton 3 in
state 7 by time 10, but it won't''.  The above sentence is considered
true if there is a sequence of signals along the output lines of
automaton 1 that would put automaton 3 in state 7 by time 10, but the
actual sequence of signals emitted by automaton 1 in the given initial
configuration does not have this effect.  Thus the question about what
automaton 1 can do becomes a question about an automaton system that
has external inputs replacing the outputs of automaton 1.  When we
want to define {\it can} for a set of initial states, the problem is
more complex and is also treated in the 1969 paper.  Unfortunately,
Dennett doesn't reach this level of concreteness in this direction or
any other.
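
	As an illustration (the notation here is schematic and is not
exactly that of the 1969 paper), let $S'$ be the automaton system
obtained by deleting automaton 1 and treating the lines it drove as
external inputs, and let $A(\pi)$ abbreviate ``driving $S'$ with the
signal sequence $\pi$ from the given initial configuration puts
automaton 3 in state 7 by time 10''.  Then the sentence above becomes

$$\hbox{can but won't}\quad\equiv\quad(\exists \pi)A(\pi)\ \wedge\ \neg A(\pi_{\rm actual}),$$

\noindent where $\pi_{\rm actual}$ is the sequence of signals that
automaton 1 actually emits in the given initial configuration.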

	If a robot uses this notion of {\it can}, it avoids paradox,
because it doesn't have to predict its own actions.  Moreover,
it gets the same kind of analysis of its possibilities that
people get when they analyze similar situations.

	Some philosophers claim that philosophy is just the sum of the
separate sciences.  One can't be sure about philosophy, but AI
requires more --- unless the study of the common sense world is to be
considered one of the sciences.  If so, it has an epistemological
character quite different from the usual sciences.

The Shannon formula for the unicity distance, where $K$ is the number
of keys and $R$ is the redundancy of the language:

$$U = {\log_2 K \over R}.$$
\noindent References:

{\bf Deavours, Cipher A. (1977)}: ``Unicity Points in Cryptanalysis'',
 {\it Cryptologia}, Vol. 1, No. 1, Jan 1977, reprinted in
 {\it Cryptology: Yesterday, Today and Tomorrow} by Deavours et al.,
Artech House.

{\bf McCarthy, J. and P. J. Hayes (1969)}: ``Some Philosophical Problems
from the Standpoint of Artificial Intelligence'', in B. Meltzer and
D. Michie (eds.), {\it Machine Intelligence 4}, Edinburgh University
Press, pp. 463-502.

{\bf McDermott, D. (1982)}: ``A temporal logic for reasoning about
processes and plans'', {\it Cognitive Science}, 6, pp. 101-155.

{\bf Shannon, C. E. (1949)}: ``Communication Theory of Secrecy
Systems'', {\it Bell System Technical Journal}, 28, pp. 656-715.

Notes:

Dennett's good ideas.

1. Dennett's 3 stances and their relations.  The intentional stance
isn't just an abbreviation of the physical stance.  The battle over
the thermostat.

2. Intuition pumps.

The history of philosophy is a catalog of failures.  Philosophy
is rather difficult.  However, it seems to me that the 20th
century has made available new tools, namely logic and computer
science, that will make genuine progress possible.

In principle, philosophy needn't be affected by science, but
philosophers have a history of mistakenly supposing they have
proved statements that later turn out to be matters of science,
and on these matters they are often wrong.  The most famous example is Kant
supposing that Euclidean geometry is something that our
minds necessarily impose upon the world.

Because concepts and theories are approximate, the hard cases in
which philosophers delight make bad theories.

Are propositions to be defined by brain states or in terms of
possible worlds?  The central usage of propositions occurs
when the two fit nicely together.  As soon as you separate
them you are on the fringe.

The philosophers behave as though they consider the ordinary
usage as unproblematical.

We AIers should not let philosophical theories mislead us.
I suppose the AI work on sequence extrapolation is an
example of this.

Some philosophers, e.g. Hubert Dreyfus, consider AI impossible,
because a computer can't behave in a sufficiently flexible way to
exhibit common sense or expertise.  Dreyfus said computers
couldn't have ``ambiguity tolerance'', ``fringe consciousness''
and the ability to ``zero in''.  Others, e.g. John Searle,
make no claims about what computers can't do but say that whatever
they do doesn't count as having beliefs, etc.
\smallskip\centerline{Copyright \copyright\ 1989 by John McCarthy}
\smallskip\noindent{This draft of dennet[f88,jmc]
\TEX ed on \jmcdate\ at \theTime}
\vfill\eject\end

Also we can hope to get more useful work out of philosophers
in the future, especially if we can persuade some of them to consider
some of their traditional topics in a more concrete way and
to use formalisms definite enough to be subsequently adaptable to AI use.
Among contemporary philosophers, Daniel Dennett is one of the
closest to the AI community in attitude, but there are important
differences, copiously illustrated in the above-mentioned books,
that make his work less useful to us than it might otherwise be.

	Part of the problem is that philosophers have many objectives
quite different from the scientific and engineering objectives of
AI research.  However, there is a lot in common, and attention
to certain more concrete matters suggested by AI may help them
achieve even purely philosophical objectives.

	In his chapter ``Beyond Belief'' Dennett tells a story called
the Ballad of Shakey's Pizza Parlor.  Tom goes to a Shakey's Pizza
Parlor in Riverside where he carves his initials in the men's room
wall.  His friends give him a Mickey Finn and transport him to a
Shakey's Pizza Parlor in Westwood where Tom wakes up.  Dennett asks
whether his belief that he has carved his initials on the men's
room wall is a correct belief about the men's room in the Riverside
Shakey's or an incorrect belief about the men's room in Westwood.
There are more examples and more considerations, but this gives
the flavor.

	My opinion is that {\it belief about} is a concept of
limited applicability, just as the manufacturer's specifications
for a D flipflop are of limited applicability.  Dennett's question
about which men's room Tom's belief is about violates the ``design
rules'' for {\it aboutness}.  It might be that our intuitive ideas
of {\it aboutness} admit a nice extension to cover this case or
maybe they don't, but an extension is what would be involved.
  This view of the matter isn't really
contrary to Dennett's position, but AI suggests a different question
about {\it about}.

	The question is what ``design rules'' to include with our robot
for applying words like ``about'' to its beliefs.  The manufacturer of
the robot guarantees that if the robot is used in accordance with the
specifications, then regarding certain of its beliefs and statements as
about something will not lead to the robot's failing to achieve its
goals.
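
	Schematically (the predicate names here are invented for
illustration and occur in none of the works under review), such a
guarantee has the form

$$\hbox{\it spec-followed}(robot) \supset
(\hbox{\it treated-as-about}(b,x) \supset \neg \hbox{\it goal-failure}(robot)),$$

\noindent in the same way that a flipflop manufacturer guarantees the
specified behavior only when the stated operating conditions are met.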

	What are sufficient conditions for beliefs being about
something?